All right. Thank you for the introduction, and thank you for inviting me to give this talk. I would like to tell you something about mathematical uncertainty quantification. As the audience is quite diverse, with ages ranging between zero and, I don't know how old exactly Eberhard is, I will try to give a very intuitive and, hopefully, very nice introduction. I will spend a lot of time laying the groundwork from, I hope, first principles, and my goal is to not lose anyone, at least during the first 30 to 40 minutes.
Okay, so what is uncertainty quantification? I'd like to start with mathematics in general. Mathematics, or at least a non-trivial subset of it, is about computation. And computation means we have some object and we want to obtain quantifiable information about it. So, for example, if we have a geometrical object, we might be interested in computing its mass or its volume, or something like that. Or if we have an object with a certain heat distribution: if we wait for an hour, what will the heat distribution be after that time? Or if you are a manufacturer and you try to launch a new product, there might be certain quantities of interest to you, like the demand that your product will face on the market, the price elasticity of your consumers, or the production cost. And given those parameters, you can compute, if you have full information, the optimal quantity of those products and the optimal price to set in order to maximize your profit. So computing is just taking your parameters and applying some mapping to them, and then you have information. Now this
arrow, of course, might be very non-trivial to realize in practice. Solving PDEs numerically is hard; there are a lot of researchers concentrating on this type of arrow. In optimization theory you might have very difficult optimization problems where you have to use very sophisticated algorithms. But in this talk I will just take this as a given, and I will just say: arrows like this we can always compute in some way or the other, if we have enough time and smart people to implement those algorithms. Now, if we do not know exactly what the demand
for our product will be, and that's almost never the case with a new product, because you don't know: if there has never been an iPhone, you don't know how many people will buy your iPhone. But you might have done some market research, and you might have a rough idea of the demand that your product will face, and such a rough idea can be condensed into a probability distribution.
So you might say: well, with 40% probability I will sell exactly one iPhone, and with 60% probability I will sell two iPhones. That's my belief about what will happen as soon as I put this on the market. And given this distribution, you can again do this forward computation: you can compute the optimal price for your iPhone under this information, and you can forward-propagate this uncertainty into a space where you can do things with it. You can also do this if your belief is continuous, so if your distribution over the parameter space is a continuous distribution, and the correct mathematical way to propagate such a measure is the pushforward of measures, which is defined by this G#μ. And that's exactly the same thing as in the discrete setting: you just move the mass to the right-hand side.
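As a small sketch of this discrete pushforward: the demand probabilities below are the ones from the iPhone example, while the profit map itself (revenue and cost numbers) is a made-up illustration, not from the talk. The pushforward simply moves each point's mass to its image under the map:

```python
from collections import defaultdict

def pushforward(mu, G):
    """Pushforward G#mu of a discrete measure mu (dict: point -> mass) under the map G."""
    nu = defaultdict(float)
    for x, mass in mu.items():
        nu[G(x)] += mass  # all mass sitting at x moves to G(x)
    return dict(nu)

# Belief about demand: sell one iPhone with probability 0.4, two with probability 0.6
mu = {1: 0.4, 2: 0.6}

# Hypothetical profit map: revenue of 500 per unit sold, minus a fixed cost of 300
G = lambda demand: 500 * demand - 300

print(pushforward(mu, G))  # {200: 0.4, 700: 0.6}
```

The same three lines handle any map G, including non-injective ones, where masses from several points pile up on one image point.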
Okay, a very easy example: you have a six-sided die, a usual board-game die, you throw it twice, and you ask what the sum of those two dice is. Of course you don't know, but you have a distribution for that, which is given by the green histogram. If this is not a fair die but a cheater's die, if it's weighted on some side and it favors the number six, then your distribution is something different. Okay, so that's how you propagate uncertainty forward in time, well, there's no time, but in the direction of this arrow. But
we might also be interested in going the other way. For example, if you are given that the sum of two dice is ten, you might be interested in computing the probability that the first die actually showed the number two, or five. And that is a problem that many of you have seen in high school or in a probability class: of course, you use this law called Bayes' law, you can compute this, and that's fine. So Bayes' law is the correct way of inverting probabilistic relations; in the continuous setting, in the discrete setting, it works in almost every setting. You have probably seen this formula before; let me just set some notation.
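For the two-dice question above, Bayes' law reduces to counting on the uniform space of 36 outcomes; a minimal sketch (the function name is just for illustration):

```python
from itertools import product
from fractions import Fraction

# All 36 equally likely outcomes of throwing a fair die twice
outcomes = list(product(range(1, 7), repeat=2))

def prob_first_given_sum(face, total):
    """P(first die == face | sum == total), computed by counting outcomes."""
    given = [o for o in outcomes if o[0] + o[1] == total]  # condition on the sum
    hits = [o for o in given if o[0] == face]
    return Fraction(len(hits), len(given))

# Sum ten can only arise from (4,6), (5,5), (6,4):
print(prob_first_given_sum(5, 10))  # 1/3
print(prob_first_given_sum(2, 10))  # 0: with a two first, a sum of ten is impossible
```

Counting works here because every outcome has the same prior probability; for a weighted die one would weight each outcome by its probability instead, which is exactly what the general Bayes formula does.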
Presenters
Accessible via: Open Access
Duration: 00:58:19 min
Recording date: 2020-11-03
Uploaded on: 2020-11-12 09:52:16
Language: en-US